INR-Bench: A Unified Benchmark for Implicit Neural Representations in Multi-Domain Regression and Reconstruction
Li, Linfei, Zhang, Fengyi, Wang, Zhong, Zhang, Lin, Shen, Ying
Implicit Neural Representations (INRs) have achieved success in a variety of signal-processing tasks thanks to their continuity and effectively infinite resolution. However, the factors governing their effectiveness and limitations remain underexplored. To better understand these factors, we leverage insights from Neural Tangent Kernel (NTK) theory to analyze how model architecture (the classic MLP and the emerging KAN), positional encoding, and nonlinear primitives shape the response to signals of varying frequencies. Building on this analysis, we introduce INR-Bench, the first comprehensive benchmark specifically designed for multimodal INR tasks. It includes 56 variants of Coordinate-MLP models (covering 4 types of positional encoding and 14 activation functions) and 22 Coordinate-KAN models with distinct basis functions, evaluated across 9 implicit multimodal tasks. These tasks cover both forward and inverse problems, offering a robust platform for highlighting the strengths and limitations of different neural models and establishing a solid foundation for future research. The code and dataset are available at https://github.com/lif314/INR-Bench.
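The positional encodings the abstract mentions can be illustrated with a minimal sketch. The snippet below implements the standard Fourier-feature mapping often used in coordinate MLPs; NTK analysis predicts that without such an encoding the network is biased toward low frequencies, while higher-frequency bands let it fit fine signal detail. This is an illustrative example, not code from the INR-Bench repository; the function name and band count are assumptions.

```python
import numpy as np

def fourier_encode(x, num_bands=6):
    """Map scalar coordinates in [0, 1] to Fourier features.

    Each band k contributes sin(2^k * pi * x) and cos(2^k * pi * x).
    Feeding these features (rather than raw coordinates) to an MLP
    counteracts the spectral bias predicted by NTK theory.
    """
    x = np.asarray(x, dtype=np.float64).reshape(-1, 1)   # (N, 1)
    freqs = (2.0 ** np.arange(num_bands)) * np.pi        # (B,)
    angles = x * freqs                                   # (N, B) via broadcasting
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)  # (N, 2B)

coords = np.linspace(0.0, 1.0, 5)
feats = fourier_encode(coords, num_bands=4)
print(feats.shape)  # (5, 8)
```

The encoded features would then be the input layer of a coordinate MLP; the benchmark's other encoding variants replace this mapping while keeping the network body fixed.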
Computing Linear Regions in Neural Networks with Skip Connections
Joyce, Johnny, Verschelde, Jan
A neural network is a composition of neurons, each of which can be represented as a nonlinear function of its inputs and its parameters, called weights and biases. The nonlinearity of the network can be understood via tropical geometry, in particular for networks with ReLU activation functions, which are piecewise linear. For such networks, we introduce an algorithm to compute all linear regions of a neural network. A linear region of a neural network is a connected region on which the map defined by the network is linear. Knowing these linear regions allows for quicker predictions, as demonstrated by our new caching algorithm. Our algorithms work for networks with skip connections, which add the output of earlier layers to the input of later layers, skipping over the layers in between. The expository paper [2] offers promising avenues to study neural networks.
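The caching idea described above can be sketched concretely: within one linear region, the ReLU pattern (which hidden units are active) is fixed, so the whole network collapses to a single affine map that can be precomputed and reused. The example below is an illustrative sketch for a tiny one-hidden-layer ReLU net with a skip connection, not the paper's tropical-geometry algorithm; the network sizes and helper names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Tiny ReLU network with a skip connection:
# f(x) = W2 @ relu(W1 @ x + b1) + x   (the skip adds the input to the output)
W1 = rng.standard_normal((4, 2))
b1 = rng.standard_normal(4)
W2 = rng.standard_normal((2, 4))

def activation_pattern(x):
    """Sign pattern of the hidden pre-activations; constant on a linear region."""
    return tuple((W1 @ x + b1 > 0).astype(int))

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + x

# On a region with fixed pattern p, ReLU acts as a diagonal 0/1 mask D, so
# f(x) = (W2 @ D @ W1 + I) @ x + W2 @ D @ b1 -- one affine map per region.
cache = {}

def cached_forward(x):
    p = activation_pattern(x)
    if p not in cache:
        D = np.diag(np.array(p, dtype=np.float64))
        A = W2 @ D @ W1 + np.eye(2)   # skip connection contributes the identity
        c = W2 @ D @ b1
        cache[p] = (A, c)
    A, c = cache[p]
    return A @ x + c

xs = rng.uniform(-2.0, 2.0, size=(500, 2))
regions = {activation_pattern(x) for x in xs}
print(len(regions))  # number of distinct linear regions hit by the samples
```

Evaluating `cached_forward` costs one matrix-vector product once its region's affine map is cached, which is the source of the speedup the abstract refers to; the paper's contribution is computing all such regions exactly rather than discovering them by sampling.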